Refactor: Remove offline mode support and update documentation #382
Conversation
📝 Walkthrough

This PR removes the `--offline` flag and its related code paths, relying on the existing semantic cache and the Ollama provider for offline behavior.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
Poem
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 0
🧹 Nitpick comments (2)
tests/test_cli.py (1)
62-69: Consider a more descriptive test name and additional assertions.

The test correctly simulates Ollama unavailability when no API key is set and verifies the failure path. However, the test could be improved for clarity and completeness:
- Test name: `test_install_no_api_key` doesn't clearly convey that it's testing the Ollama fallback failure scenario. Consider renaming it to `test_install_no_api_key_ollama_unavailable`.
- Error message verification: The test only checks the return code but doesn't verify that an appropriate error message is shown to the user. Consider adding an assertion to check that the mocked error message is properly handled.
Suggested improvements
```diff
     @patch.dict(os.environ, {}, clear=True)
-    def test_install_no_api_key(self):
+    def test_install_no_api_key_ollama_unavailable(self):
         # When no API key is set, the CLI falls back to Ollama.
         # If Ollama is running, this should succeed. If not, it should fail.
         # We'll mock Ollama to be unavailable to test the failure case.
         with patch("cortex.llm.interpreter.CommandInterpreter.parse") as mock_parse:
             mock_parse.side_effect = RuntimeError("Ollama not available")
             result = self.cli.install("docker")
             self.assertEqual(result, 1)
+            # Verify parse was called (indicating Ollama fallback was attempted)
+            mock_parse.assert_called_once()
```

docs/ISSUE-268-TESTING.md (1)
56-68: Add Ollama installation prerequisite for Test 4.

Test 4 describes using Ollama for true offline operation, but the prerequisites section (lines 5-14) doesn't mention that Ollama needs to be installed on the system. Users attempting Test 4 without Ollama installed will encounter errors.
Suggested addition to prerequisites
Add to the Prereqs section around line 8:
```diff
 ## Prereqs
 - Python 3.10+
 - Project installed in editable mode
+- Ollama installed (optional, required only for Test 4)
+  - Install from: https://ollama.ai/
+  - Pull a model: `ollama pull llama2`
```
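As a quick pre-flight for Test 4, a reviewer could also verify the local Ollama setup. A minimal sketch, assuming the `ollama` CLI from the link above is installed; the model name matches the suggested prereq and is otherwise illustrative:

```shell
# Confirm the Ollama daemon is reachable and list locally available models.
ollama list

# Pull the model referenced in the prereqs if it is not present yet.
ollama pull llama2
```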
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- CHANGELOG.md
- cortex/ask.py
- cortex/cli.py
- cortex/llm/interpreter.py
- docs/COMMANDS.md
- docs/ISSUE-268-TESTING.md
- tests/test_ask.py
- tests/test_cli.py
💤 Files with no reviewable changes (4)
- tests/test_ask.py
- cortex/llm/interpreter.py
- docs/COMMANDS.md
- cortex/ask.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- tests/test_cli.py
- cortex/cli.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_cli.py
🧬 Code graph analysis (2)
tests/test_cli.py (1)
cortex/cli.py (1)
`install` (314-560)
cortex/cli.py (1)
cortex/llm/interpreter.py (1)
`CommandInterpreter` (18-387)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Build Package
- GitHub Check: Agent
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
🔇 Additional comments (3)
CHANGELOG.md (1)
22-26: LGTM! Clear breaking change documentation.

The changelog entry clearly documents the removal of the `--offline` flag and provides helpful migration guidance for users. The explanation of alternatives (semantic cache for cached requests and `CORTEX_PROVIDER=ollama` for true offline operation) aligns well with the PR objectives.

cortex/cli.py (1)
356-356: LGTM! Correct removal of offline parameter.

The instantiation of `CommandInterpreter` correctly omits the `offline` parameter, aligning with the updated signature in `cortex/llm/interpreter.py`. The change is minimal and maintains proper integration with the refactored offline behavior.

docs/ISSUE-268-TESTING.md (1)
43-55: LGTM! Clear cache verification workflow.

Test 3 provides a clear and actionable workflow for verifying cache hits on repeated requests. The expected outcomes are well-defined, and the addition of the explicit `cortex cache stats` step helps users confirm that caching is working correctly.
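For illustration, a minimal sketch of that workflow; the `cortex install` and `cortex cache stats` commands are taken from elsewhere in this PR, while the package name and exact output are assumptions:

```shell
# First request goes to the configured LLM provider and populates the semantic cache.
cortex install docker

# An identical repeated request should be served from the cache.
cortex install docker

# Inspect the cache counters to confirm the second run registered a hit.
cortex cache stats
```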
Pull request overview
This PR removes the redundant `--offline` CLI flag, streamlining offline functionality by relying on the existing semantic cache for cached requests and `CORTEX_PROVIDER=ollama` for true offline operation.
- Removed `--offline` flag and related code from the CLI, `CommandInterpreter`, and `AskHandler`
- Updated documentation to guide users toward the semantic cache and Ollama provider for offline capabilities
- Modified tests to reflect the removal of offline mode functionality
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated no comments.
Show a summary per file
| File | Description |
|---|---|
| cortex/cli.py | Removed `--offline` argument parsing, `self.offline` attribute, and passing `offline` parameter to handlers |
| cortex/llm/interpreter.py | Removed `offline` parameter from `__init__` and the offline mode check that raised `RuntimeError` |
| cortex/ask.py | Removed `offline` parameter from `__init__` and the offline mode check that raised `RuntimeError` |
| tests/test_cli.py | Updated `test_install_no_api_key` to mock Ollama unavailability instead of testing offline flag behavior |
| tests/test_ask.py | Removed `test_ask_offline_no_cache` test case that validated offline mode error handling |
| docs/COMMANDS.md | Removed `--offline` flag from global options documentation |
| docs/ISSUE-268-TESTING.md | Updated testing guide to replace offline flag tests with semantic cache and Ollama provider examples |
| CHANGELOG.md | Added breaking change notice documenting the removal and migration path |
@Suyashd999



Related Issue
Closes #380
Summary
Removed the redundant `--offline` flag from the CLI, as it duplicates existing offline functionality:

- The semantic cache already serves repeated requests without a network call
- `CORTEX_PROVIDER=ollama` enables true offline operation with a local LLM

Changes
- Removed the `--offline` CLI argument and related code from `cortex/cli.py`
- Removed the `offline` parameter from `CommandInterpreter` and `AskHandler`
- Updated documentation (`docs/COMMANDS.md`, `docs/ISSUE-268-TESTING.md`)
- Added a breaking change notice to `CHANGELOG.md`

Migration
Users who were using `--offline` should migrate as follows (a minimal sketch follows this list):

- For cached requests: `cortex install <package>` (cache works automatically)
- For true offline operation: `export CORTEX_PROVIDER=ollama`
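A minimal migration sketch, assuming a local Ollama install and using `llama2` (the model name from the testing guide) purely for illustration:

```shell
# True offline operation: switch the provider to a local Ollama model.
export CORTEX_PROVIDER=ollama
ollama pull llama2   # illustrative model; any locally available model works

# Cached requests need no flag at all; the semantic cache is consulted automatically.
cortex install nginx   # package name is illustrative
```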
Checklist

- All tests pass (`pytest tests/`): 847 passed, 10 skipped
- `--offline` removed from help output

Summary by CodeRabbit
Release Notes
Breaking Changes
- Removed the `--offline` flag from the CLI. Offline functionality now requires setting the `CORTEX_PROVIDER=ollama` environment variable.

Documentation
Tests